VLCA: vision-language aligning model with cross-modal attention for bilingual remote sensing image captioning

Authors

Abstract

In the field of satellite imagery, remote sensing image captioning (RSIC) is a hot topic, with overfitting and image-text alignment as key challenges. To address these issues, this paper proposes a vision-language aligning paradigm for RSIC that jointly represents vision and language. First, a new dataset, DIOR-Captions, is built by augmenting the object detection in optical remote sensing images (DIOR) dataset with manually annotated Chinese and English contents. Second, a Vision-Language aligning model with Cross-modal Attention (VLCA) is presented to generate accurate and abundant bilingual descriptions for remote sensing images. Third, a cross-modal learning network is introduced to address the problem of visual-lingual alignment. Notably, VLCA is also applied to end-to-end Chinese caption generation by using a pre-trained Chinese language model. Experiments are carried out against various baselines to validate VLCA on the proposed dataset. The results demonstrate that the proposed algorithm produces more descriptive and informative captions than existing algorithms.
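The cross-modal attention at the core of VLCA can be illustrated as scaled dot-product attention in which caption tokens (queries) attend over image-region features (keys/values). The sketch below is illustrative only; the function name, dimensions, and single-head formulation are assumptions, not the paper's exact architecture.

```python
import numpy as np

def cross_modal_attention(text_queries, image_features):
    """Attend from text tokens over image regions (single-head sketch).

    text_queries:   (num_tokens, d) text embeddings
    image_features: (num_regions, d) visual region features
    Returns (num_tokens, d) vision-grounded token representations.
    """
    d_k = text_queries.shape[-1]
    # Scaled dot-product scores between every token and every region.
    scores = text_queries @ image_features.T / np.sqrt(d_k)
    # Softmax over regions: each token gets an attention distribution.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of region features pools the relevant visual context.
    return weights @ image_features

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))    # e.g., 5 caption tokens
regions = rng.normal(size=(9, 16))   # e.g., 9 image regions
out = cross_modal_attention(tokens, regions)
print(out.shape)  # (5, 16)
```

In a full captioning model this operation would sit inside a decoder layer, letting each generated word (in either language) ground itself in the most relevant image regions.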


Similar articles

Image Captioning with Attention

In the past few years, neural networks have fueled dramatic advances in image classification. Emboldened, researchers are looking for more challenging applications for computer vision and artificial intelligence systems. They seek not only to assign numerical labels to input data, but to describe the world in human terms. Image and video captioning is among the most popular applications in this t...


Text-Guided Attention Model for Image Captioning

Visual attention plays an important role in understanding images and has demonstrated its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns t...


Image Captioning using Visual Attention

This project aims at generating captions for images using neural language models. There has been a substantial increase in the number of proposed models for the image captioning task since neural language models and convolutional neural networks (CNNs) became popular. Our project builds on one such work, which uses a variant of recurrent neural network coupled with a CNN. We intend to enhance t...


Recurrent Highway Networks with Language CNN for Image Captioning

Language models based on recurrent neural networks have dominated recent image caption generation tasks. In this paper, we introduce a language CNN model which is suitable for statistical language modeling tasks and shows competitive performance in image captioning. In contrast to previous models, which predict the next word based on one previous word and a hidden state, our language CNN is fed with a...


Attention Correctness in Neural Image Captioning

Attention Map Visualization We visualize the attention maps of both the implicit attention model and our supervised attention model on the Flickr30k test set. As mentioned in the paper, 909 noun phrases are aligned for the implicit model and 901 for the supervised model. 635 of these alignments are common for both, and 595 of them have corresponding bounding boxes. Here we present a subset due ...



Journal

Journal title: Chinese Journal of Systems Engineering and Electronics

Year: 2023

ISSN: 1004-4132

DOI: https://doi.org/10.23919/jsee.2023.000035